
    Distributed semantic mapping for heterogeneous robotic teams

    In this paper, we summarize our current work on distributed semantic mapping within heterogeneous robot teams in large-scale unstructured environments. We extract semantic information from sensor readings and use it to perform robust registration of sub-maps from different agents. We further use it to reduce network traffic by excluding detected areas of high uncertainty. For fast development and verification of our approaches, we employ a multi-robot real-time simulation.
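    The traffic-reduction idea described above — dropping high-uncertainty map regions before transmission — can be sketched in a few lines. This is an illustrative fragment, not the authors' implementation; the function name `filter_submap`, the per-cell uncertainty representation, and the threshold value are all assumptions for the sketch.

    ```python
    import numpy as np

    def filter_submap(cells, uncertainty, threshold=0.7):
        """Drop high-uncertainty cells from a sub-map before transmission.

        cells: (N, 3) array of map-cell coordinates
        uncertainty: (N,) per-cell uncertainty values in [0, 1]
        threshold: cells at or above this value are excluded (assumed cutoff)
        """
        keep = uncertainty < threshold
        return cells[keep]

    # Example: 5 cells, two of which exceed the uncertainty threshold
    cells = np.arange(15).reshape(5, 3)
    unc = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
    reduced = filter_submap(cells, unc)
    print(len(reduced))  # 3
    ```

    Only the retained cells would then be serialized and sent to the other agents, which is where the bandwidth saving comes from.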

    The LRU Rover for Autonomous Planetary Exploration and its Success in the SpaceBotCamp Challenge

    The task of planetary exploration poses many challenges for a robot system, from weight and size constraints to sensors and actuators suitable for extraterrestrial environment conditions. As there is a significant communication delay to other planets, the efficient operation of a robot system requires a high level of autonomy. In this work, we present the Light Weight Rover Unit (LRU), a small and agile rover prototype that we designed for the challenges of planetary exploration. Its locomotion system with individually steered wheels allows for high maneuverability in rough terrain, and the use of stereo cameras as its main sensor ensures the applicability to space missions. We implemented software components for self-localization in GPS-denied environments, environment mapping, object search and localization, and for the autonomous pickup and assembly of objects with its arm. Additional high-level mission control components facilitate both autonomous behavior and remote monitoring of the system state over a delayed communication link. We successfully demonstrated the autonomous capabilities of our LRU at the SpaceBotCamp challenge, a national robotics contest with a focus on autonomous planetary exploration. A robot had to autonomously explore a moon-like rough-terrain environment, locate and collect two objects, and assemble them after transport to a third object - which the LRU did on its first try, in half of the allowed time, and fully autonomously.

    Testing for the MMX Rover Autonomous Navigation Experiment on Phobos

    The MMX rover will explore the surface of Phobos, Mars' bigger moon. It will use its stereo cameras for perceiving the environment, enabling the use of vision-based autonomous navigation algorithms. The German Aerospace Center (DLR) is currently developing the corresponding autonomous navigation experiment that will allow the rover to efficiently explore the surface of Phobos, despite limited communication with Earth and long turn-around times for operations. This paper discusses our testing strategy for the autonomous navigation solution. We present our general testing strategy for the software, considering a development approach with agile aspects. We detail how we ensure successful integration with the rover system despite having limited access to the flight hardware. We furthermore discuss which environmental conditions on Phobos pose a potential risk for the navigation algorithms and how we test for these accordingly. Our testing is mostly data-set-based, and we describe our approaches for recording navigation data that is representative both of the rover system and of the Phobos environment. Finally, we make the corresponding data set publicly available and provide an overview of its contents.
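    The data-set-based testing the abstract describes amounts to a replay-and-compare loop: recorded sensor frames are fed through the navigation software and the resulting trajectory is checked against a stored reference. A minimal sketch of that pattern, with an invented stand-in for the pipeline under test (the names `run_navigation`, `regression_test`, and the tolerance are illustrative, not from the paper):

    ```python
    import math

    def run_navigation(frames):
        # Stand-in for the navigation pipeline under test: here it simply
        # integrates per-frame 2D odometry increments into positions.
        x, y = 0.0, 0.0
        trajectory = []
        for dx, dy in frames:
            x += dx
            y += dy
            trajectory.append((x, y))
        return trajectory

    def regression_test(frames, reference, tol=0.05):
        # Replay recorded frames and require every estimated position to
        # stay within `tol` of the stored reference trajectory.
        estimated = run_navigation(frames)
        errors = [math.hypot(ex - rx, ey - ry)
                  for (ex, ey), (rx, ry) in zip(estimated, reference)]
        return max(errors) <= tol

    frames = [(1.0, 0.0), (1.0, 0.1), (0.9, 0.0)]
    reference = [(1.0, 0.0), (2.0, 0.1), (2.9, 0.1)]
    print(regression_test(frames, reference))  # True
    ```

    The value of this setup is that the same recorded data can be replayed after every software change, so regressions are caught without access to the flight hardware.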

    Mobility on the Surface of Phobos for the MMX Rover - Simulation-Aided Movement Planning

    The MMX Rover, recently named IDEFIX, will be the first wheeled robotic system to be operated in a milli-g environment. Mobility in this environment, particularly in combination with the interrupted communication schedule and the activation of on-board autonomous functions such as attitude control, requires efficient planning. The Mobility Group within the MMX Rover Team is tasked with proposing optimal solutions to move the rover safely and efficiently to its destination so that it may achieve its scientific goals. These movements combine various commands to the locomotion system and to the navigation systems developed by both institutions. In the mission's early phase, these actions will rely heavily on manual driving commands to the locomotion system until the rover behavior and environment assumptions are confirmed. Planning safe and efficient rover movements is a multi-step process. This paper focuses on the challenges and limitations in sequencing movements for a rover on Phobos in the context of the MMX mission. The context in which this process takes place is described in terms of available data and operational constraints.

    Simulation of Artificial Intelligence Agents using Modelica and the DLR Visualization Library

    This paper introduces a scheme for testing artificial intelligence algorithms of autonomous systems using Modelica and the DLR Visualization Library. The simulation concept follows the 'software-in-the-loop' principle, in which no adaptations are made to the tested algorithms. The environment is replaced by an artificial world, and the rest of the autonomous system is modeled in Modelica. The scheme is introduced and explained using the example of the ROboMObil, a robotic electric vehicle developed by DLR's Robotics and Mechatronics Center.
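    The software-in-the-loop principle the abstract names means the control algorithm runs unmodified, while the plant and environment it would normally act on are replaced by simulation models. A minimal sketch of that structure (the paper uses Modelica; this Python fragment, including the `controller` and `simulate` names and the toy vehicle dynamics, is an invented illustration of the principle only):

    ```python
    def controller(target, position, velocity):
        # Algorithm under test: a simple damped proportional controller.
        # In software-in-the-loop testing, this function is NOT modified.
        return 0.5 * (target - position) - 0.8 * velocity

    def simulate(target, steps=50, dt=0.1):
        # Simulated plant: a toy 1D vehicle driven by the commanded
        # acceleration, integrated with explicit Euler steps.
        position, velocity = 0.0, 0.0
        for _ in range(steps):
            command = controller(target, position, velocity)
            velocity += command * dt
            position += velocity * dt
        return position

    # The unmodified controller is exercised against the simulated world.
    final = simulate(target=1.0)
    print(final)
    ```

    The key design point is the boundary: only the plant and environment are swapped for models, so the very same algorithm code can later be deployed on the real system.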

    Mixed Reality for Intuitive Photo-Realistic 3D-Model Generation

    Appropriate user interfaces are mandatory for an intuitive and efficient digitisation of real objects with a hand-held scanning device. We discuss two central aspects involved in this process, i.e., view-planning and navigation. We claim that the streaming generation of the 3D model and its immediate visualisation best support the user in the view-planning task. In addition, we promote a mixed visualisation of the real and the virtual object for good navigation support. The paper outlines the components of the scanning device used and the processing pipeline, consisting of synchronisation, calibration, streaming surface generation, and texture mapping.

    Tackling Multi-sensory 3D Data Acquisition and Fusion

    The development of applications for multi-sensor data fusion typically faces heterogeneous hardware components, a variety of sensing principles, and limited computational resources. We present a concept for synchronization and communication which tackles these challenges in multi-sensor systems in a unified manner. Here, a combination of hardware synchronization and deterministic software signals is promoted for global synchronization. Patterns of event-driven communication ensure that sensor data processing and evaluation are no longer bound to runtime constraints induced by data acquisition. The combination of a unified range and pose data description, event-driven communication, and global synchronization allows building 3D-sensing applications for various tasks. The proposed concept is implemented and evaluated for a variety of applications based on the DLR Multisensory 3D-Modeller. Extension to other range and pose sensors is straightforward.
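    One consequence of globally synchronized clocks plus event-driven communication is that fusion can happen by timestamp matching: each sensor pushes stamped events into a buffer, and a range frame is paired with the pose sample closest in time, decoupled from acquisition timing. A minimal sketch of that pairing (the `PoseBuffer` class and all names are invented for illustration, not the DLR 3D-Modeller API):

    ```python
    from bisect import bisect_left

    class PoseBuffer:
        """Buffer of timestamped pose events, queried by nearest timestamp."""

        def __init__(self):
            self.stamps, self.poses = [], []

        def push(self, stamp, pose):
            # Events arrive in time order, so appending keeps stamps sorted.
            self.stamps.append(stamp)
            self.poses.append(pose)

        def nearest(self, stamp):
            # Binary-search for the pose sample closest to the query stamp.
            i = bisect_left(self.stamps, stamp)
            candidates = [j for j in (i - 1, i) if 0 <= j < len(self.stamps)]
            best = min(candidates, key=lambda j: abs(self.stamps[j] - stamp))
            return self.poses[best]

    poses = PoseBuffer()
    for t in (0.00, 0.01, 0.02, 0.03):
        poses.push(t, f"pose@{t}")

    # A range frame stamped at t = 0.021 is fused with the closest pose sample.
    print(poses.nearest(0.021))  # pose@0.02
    ```

    Because both streams share one synchronized clock, this lookup is meaningful; without hardware synchronization the stamps of the two sensors could not be compared directly.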